46 - Recap Clip 8.8: Evaluating and Choosing the Best Hypothesis (Part 2) [ID:30449]

Just to summarize again,

we have an optimization process in which we can combat overfitting and so on.

But we also have these iterative algorithms whose hypotheses can grow in size.

Using this validation phase,

which we can conduct alongside training,

we actually know when to stop.

The next step we took was

a slightly finer analysis than just looking at error rates.

Here, we're counting every error exactly the same way.

But in many situations, that's not realistic.

Some errors are worse than others.

So what you really want to do is you want to weight the errors.

The way we do this in this course is, of course, determined by our goal of maximizing utilities:

we weight the errors by the loss in utility they cause.

It's the obvious thing to do.

So that directly gets us to the loss function,

which really just measures the loss in utility.

That may actually be different for different types of errors.

You need a loss model.

Sometimes, you really want to have

drastically different losses depending on what the error is.

If you only look at the error rate,

then one error counts just like another.

So the system wouldn't know which one to prefer,

which one to plan for,

or in which direction to be conservative.

All of those questions you can model by minimizing this loss here.

Then, of course, inside the loss function,

you can have different ways of

measuring the losses.

What you are typically choosing is essentially how you count the errors.

You can count them in a zero/one way,

or you can count them quadratically,

or you can prefer small errors and so on.
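The three ways of counting errors mentioned here can be sketched in Python (the function names are illustrative, not from the course):

```python
def zero_one_loss(y_true, y_pred):
    """0/1 loss: every error counts the same, regardless of magnitude."""
    return 0 if y_true == y_pred else 1

def absolute_loss(y_true, y_pred):
    """Absolute (L1) loss: prefers small errors, penalizing them linearly."""
    return abs(y_true - y_pred)

def quadratic_loss(y_true, y_pred):
    """Quadratic (L2) loss: penalizes large errors disproportionately."""
    return (y_true - y_pred) ** 2
```

For example, predicting 5 when the true value is 3 gives losses 1, 2, and 4 under the three measures, respectively.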

That is then folded into this loss function,

which we call the generalization loss for a given loss measure,

where we weight the losses with

the prior probability of the respective example actually occurring.
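Written out, the quantity described here takes the standard textbook form (symbols are assumed: loss measure \(L\), hypothesis \(h\), examples \((x,y)\) with probability \(P\)):

```latex
% Generalization loss of hypothesis h under loss measure L:
% each possible example (x, y) is weighted by its probability of occurring.
\mathrm{GenLoss}_L(h) = \sum_{(x,y)\in\mathcal{E}} L\bigl(y, h(x)\bigr)\, P(x,y)

% The best hypothesis is the one that minimizes it:
\hat{h} = \operatorname*{argmin}_{h\in\mathcal{H}} \mathrm{GenLoss}_L(h)
```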

If we minimize that,

we get a good hypothesis.

If we have an algorithm that efficiently computes the hypothesis minimizing this,

then that's a learning algorithm.

And it is a learning algorithm that is actually oriented towards utility,

something we have convinced ourselves is important for our agents.
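A minimal sketch of such a procedure, assuming (unrealistically, as the lecture notes at the end) that we knew the true probability of each example:

```python
def generalization_loss(h, examples, loss, prob):
    """Expected loss of hypothesis h: the loss on each possible example
    (x, y), weighted by the probability of that example occurring."""
    return sum(prob[(x, y)] * loss(y, h(x)) for (x, y) in examples)

def best_hypothesis(hypotheses, examples, loss, prob):
    """Pick the hypothesis with minimal generalization loss."""
    return min(hypotheses,
               key=lambda h: generalization_loss(h, examples, loss, prob))
```

For instance, with examples `[(0, 0), (1, 1)]`, each occurring with probability 0.5, the identity hypothesis has zero generalization loss and beats a constant-zero hypothesis under quadratic loss.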

Of course, and you know the pattern,

we do some beautiful theory here,

and then I tell you, well,

it's all for nothing because we don't have the information we need.

Part of chapter:
Recaps

Accessible via

Open access

Duration

00:09:40 min

Recording date

2021-03-30

Uploaded on

2021-03-31 11:17:56

Language

en-US

Recap: Evaluating and Choosing the Best Hypothesis (Part 2)

The main video on this topic is in chapter 8, clip 8.
